Robust Contrastive Multi-view Clustering against Dual Noisy Correspondence

Neural Information Processing Systems

Recently, contrastive multi-view clustering (MvC) has emerged as a promising avenue for analyzing data from heterogeneous sources, typically leveraging off-the-shelf instance pairs as positives and randomly sampled ones as negatives. In practice, however, this paradigm unavoidably suffers from the Dual Noisy Correspondence (DNC) problem, in which noise compromises the construction of both positive and negative pairs.
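The positives-versus-sampled-negatives setup this abstract describes is typically scored with the standard InfoNCE objective. The sketch below is illustrative, not the paper's implementation; vector values, the temperature, and the toy "noisy" pair are assumptions for demonstration.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push
    negatives away. Vectors are plain lists; similarity is the dot product."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    pos = math.exp(dot(anchor, positive) / temperature)
    negs = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))

anchor = [1.0, 0.0]
# A correctly matched positive pair yields a low loss...
loss_clean = info_nce(anchor, [1.0, 0.0], [[0.0, 1.0], [-1.0, 0.0]])
# ...while a corrupted correspondence (a "noisy positive") yields a high one.
loss_noisy = info_nce(anchor, [-1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The gap between the two losses is exactly why noisy correspondence is harmful: the objective confidently pulls together pairs that should be apart.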






Empowering Collaborative Filtering with Principled Adversarial Contrastive Loss

Neural Information Processing Systems

Contrastive Learning (CL) has achieved impressive performance in self-supervised learning tasks, showing superior generalization ability. Inspired by this success, adopting CL in collaborative filtering (CF) has become prevalent for semi-supervised top-K recommendation. The basic idea is to routinely conduct heuristic-based data augmentation and apply contrastive losses (e.g., InfoNCE) on the augmented views. Yet, several CF-tailored challenges make this adoption suboptimal: out-of-distribution issues, the risk of false negatives, and the nature of top-K evaluation. They necessitate that CL-based CF schemes focus more on mining hard negatives and distinguishing false negatives from the vast unlabeled user-item interactions to obtain informative contrast signals. Worse still, there is limited understanding of contrastive loss in CF methods, especially w.r.t.
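The hard-versus-false-negative distinction the abstract raises can be sketched with a simple score-based heuristic: unlabeled items the model already scores very highly are likely false negatives (the user may actually like them) and are dropped, while the next-highest scorers are kept as informative hard negatives. The threshold, scoring function, and item names below are illustrative assumptions, not the paper's method.

```python
def pick_hard_negatives(user_vec, candidates, false_neg_cut=0.9, top_k=2):
    """Rank unlabeled items by predicted score; discard suspiciously
    high scorers as likely false negatives, keep the rest as hard negatives."""
    def score(v):
        return sum(a * b for a, b in zip(user_vec, v))
    ranked = sorted(((score(v), name) for name, v in candidates.items()),
                    reverse=True)
    kept = [(s, n) for s, n in ranked if s < false_neg_cut]
    return [n for _, n in kept[:top_k]]

user = [1.0, 0.0]
items = {"likely_false_neg": [0.95, 0.1],   # scores 0.95: dropped, not contrasted
         "hard_neg": [0.6, 0.4],            # scores 0.60: kept, informative
         "easy_neg": [-0.8, 0.2]}           # scores -0.80: kept, uninformative
negatives = pick_hard_negatives(user, items)  # → ['hard_neg', 'easy_neg']
```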


Democratizing ML for Enterprise Security: A Self-Sustained Attack Detection Framework

Momeni, Sadegh, Zhang, Ge, Huber, Birkett, Harkous, Hamza, Lipton, Sam, Seguin, Benoit, Pavlidis, Yanis

arXiv.org Artificial Intelligence

Despite advancements in machine learning for security, rule-based detection remains prevalent in Security Operations Centers due to the resource intensiveness and skill gap associated with ML solutions. While traditional rule-based methods offer efficiency, their rigidity leads to high false positives or negatives and requires continuous manual maintenance. This paper proposes a novel, two-stage hybrid framework to democratize ML-based threat detection. The first stage employs intentionally loose YARA rules for coarse-grained filtering, optimized for high recall. To overcome data scarcity, the system leverages Simula, a seedless synthetic data generation framework, enabling security analysts to create high-quality training datasets without extensive data science expertise or pre-labeled examples. A continuous feedback loop incorporates real-time investigation results to adaptively tune the ML model, preventing rule degradation. The proposed model with active learning has been rigorously tested over a prolonged period in a production environment spanning tens of thousands of systems. The system handles initial raw log volumes often reaching 250 billion events per day, reducing them through filtering and ML inference to a handful of daily tickets for human investigation. Live experiments over an extended timeline demonstrate a general improvement in the model's precision over time due to the active learning feature. This approach offers a self-sustained, low-overhead, and low-maintenance solution, allowing security professionals to guide model learning as expert "teachers". Despite significant advancements in machine learning (ML) for security, traditional rule-based detection remains the predominant approach in enterprise security operations.
This is evidenced by the low adoption rate of ML-based technologies in Security Operations Centers (SOC), with one study [1] finding that only 10% of participating SOCs utilized AI/ML security monitoring tools.
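The two-stage shape of the framework can be sketched as a pipeline: a deliberately loose, high-recall rule over-matches, and an ML scoring stage restores precision before anything becomes a ticket. Everything below (the rule, the features, the weights, the threshold) is a toy assumption, not the paper's actual YARA rules or model.

```python
def coarse_rule(event):
    """Stage 1: an intentionally loose, high-recall filter (stand-in for a
    loose YARA rule). It over-matches on purpose; stage 2 cleans up."""
    return "powershell" in event["cmd"].lower()

def ml_score(event, weights):
    """Stage 2: a toy scoring model over simple command-line features."""
    feats = {"encoded": "-enc" in event["cmd"],
             "download": "http" in event["cmd"]}
    return sum(weights[k] for k, v in feats.items() if v)

def triage(events, weights, threshold=1.0):
    alerts = [e for e in events if coarse_rule(e)]        # high recall, noisy
    return [e for e in alerts if ml_score(e, weights) >= threshold]

weights = {"encoded": 0.8, "download": 0.6}
events = [
    {"id": 1, "cmd": "powershell -enc aQB... http://evil.test"},
    {"id": 2, "cmd": "powershell Get-Date"},   # passes the rule, dropped by ML
    {"id": 3, "cmd": "notepad readme.txt"},    # never passes the rule
]
tickets = triage(events, weights)
```

The feedback loop described in the abstract would correspond to nudging `weights` (or retraining the stage-2 model) from analyst verdicts on each ticket.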


How to Learn a Star: Binary Classification with Starshaped Polyhedral Sets

Brandenburg, Marie-Charlotte, Jochemko, Katharina

arXiv.org Artificial Intelligence

We consider binary classification restricted to a class of continuous piecewise linear functions whose decision boundaries are (possibly nonconvex) starshaped polyhedral sets, supported on a fixed polyhedral simplicial fan. We investigate the expressivity of these function classes and describe the combinatorial and geometric structure of the loss landscape, most prominently the sublevel sets, for two loss functions: the 0/1-loss (discrete loss) and a log-likelihood loss function. In particular, we give explicit bounds on the VC dimension of this model, and concretely describe the sublevel sets of the discrete loss as chambers in a hyperplane arrangement. For the log-likelihood loss, we give sufficient conditions for the optimum to be unique, and describe the geometry of the optimum when varying the rate parameter of the underlying exponential probability distribution.
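For orientation, the 0/1-loss mentioned above is the standard discrete loss; the display below is the textbook definition and its empirical risk, not notation taken from the paper itself, whose exact setup may differ.

```latex
\[
\ell_{0/1}\bigl(f; (x, y)\bigr) = \mathbf{1}\bigl[\operatorname{sign} f(x) \neq y\bigr],
\qquad
\hat{R}_{0/1}(f) = \frac{1}{m} \sum_{i=1}^{m} \ell_{0/1}\bigl(f; (x_i, y_i)\bigr),
\]
```

so a sublevel set of the discrete loss collects all classifiers $f$ in the model class misclassifying at most a given fraction of the sample.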


A Set of Rules for Model Validation

Camacho, José

arXiv.org Machine Learning

The validation of a data-driven model is the process of assessing the model's ability to generalize to new, unseen data in the population of interest. This paper proposes a set of general rules for model validation. These rules are designed to help practitioners create reliable validation plans and report their results transparently. While no validation scheme is flawless, these rules can help practitioners ensure their strategy is sufficient for practical use, openly discuss any limitations of their validation strategy, and report clear, comparable performance metrics.

Keywords: Validation, Cross-validation

Email address: josecamacho@ugr.es

1. Introduction

Model validation is a fundamental task in all modern data-driven systems, whether they fall under the broad categories of Statistics, Machine Learning (ML), Artificial Intelligence (AI), or more specialized fields like chemometrics. Validation has become a major focus for regulatory and standardization bodies, with key reports and standards highlighting the growing concern for ensuring the trustworthiness and reliability of data-driven models: the NIST AI Risk Management Framework (AI RMF 1.0, 2023), published by the U.S. Department of Commerce, which provides management techniques to address the risks and ensure the trustworthiness of AI systems, with validation as a core component; the EU AI Act of 2024, a landmark piece of EU legislation that categorizes AI systems by risk level, where validation is not defined as a best practice but as a legal requirement within the conformity assessment; ISO/IEC TS 4213:2022, by the International Organization for Standardization (ISO), which describes approaches and methods to ensure the relevance ...; and IEEE P2841-2022, a recommended practice for the framework and process for deep learning evaluation.
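The cross-validation named in the keywords is the workhorse behind most validation plans. The sketch below is a minimal k-fold procedure in plain Python, with a toy "model" (the training mean) and metric (mean absolute error) chosen purely for illustration; it omits shuffling and stratification, which a real validation plan should consider.

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds (no shuffling)."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, k, fit, score):
    """Train on k-1 folds, evaluate on the held-out fold, repeat k times."""
    folds = k_fold_indices(len(data), k)
    results = []
    for held_out in folds:
        train = [data[i] for i in range(len(data)) if i not in held_out]
        test = [data[i] for i in held_out]
        model = fit(train)
        results.append(score(model, test))
    return results

# Toy use: the "model" is the training mean; the score is mean absolute error.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
fit = lambda train: sum(train) / len(train)
score = lambda m, test: sum(abs(x - m) for x in test) / len(test)
errors = cross_validate(data, 3, fit, score)  # one error estimate per fold
```

Reporting the per-fold scores (rather than a single average) is one way to satisfy the paper's call for clear, comparable performance metrics.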


Scaling Item-to-Standard Alignment with Large Language Models: Accuracy, Limits, and Solutions

Karimi-Malekabadi, Farzan, Razavi, Pooya, Powers, Sonya

arXiv.org Artificial Intelligence

As educational systems evolve, ensuring that assessment items remain aligned with content standards is essential for maintaining fairness and instructional relevance. Traditional human alignment reviews are accurate but slow and labor-intensive, especially across large item banks. This study examines whether Large Language Models (LLMs) can accelerate this process without sacrificing accuracy. Using over 12,000 item-skill pairs in grades K-5, we tested three LLMs (GPT-3.5 Turbo, GPT-4o-mini, and GPT-4o) across three tasks that mirror real-world challenges: identifying misaligned items, selecting the correct skill from the full set of standards, and narrowing candidate lists prior to classification. In Study 1, GPT-4o-mini correctly identified alignment status in approximately 83-94% of cases, including subtle misalignments. In Study 2, performance remained strong in mathematics but was lower for reading, where standards are more semantically overlapping. Study 3 demonstrated that pre-filtering candidate skills substantially improved results, with the correct skill appearing among the top five suggestions more than 95% of the time. These findings suggest that LLMs, particularly when paired with candidate filtering strategies, can significantly reduce the manual burden of item review while preserving alignment accuracy. We recommend the development of hybrid pipelines that combine LLM-based screening with human review in ambiguous cases, offering a scalable solution for ongoing item validation and instructional alignment.
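The candidate-filtering strategy from Study 3 can be sketched as a similarity shortlist: rank all standards by embedding similarity to the item, then hand only the top few to the LLM for final classification. The three-dimensional vectors and skill names below are fabricated stand-ins; in practice the embeddings would come from a text-embedding model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def shortlist(item_vec, standards, top_n=5):
    """Stage 1: keep the top_n standards most similar to the item as
    candidates for the LLM, instead of classifying against the full set."""
    ranked = sorted(standards.items(),
                    key=lambda kv: cosine(item_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_n]]

item = [0.9, 0.1, 0.0]                         # embedding of one math item
standards = {
    "add_within_20":      [0.8, 0.2, 0.0],
    "subtract_within_20": [0.7, 0.3, 0.1],
    "read_fluently":      [0.0, 0.1, 0.9],
}
candidates = shortlist(item, standards, top_n=2)
```

Shrinking the choice set this way is what pushed the correct skill into the top five suggestions over 95% of the time in the study.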